Results 1 - 20 of 32

1.
iScience ; 27(4): 109519, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38595795

ABSTRACT

Efficient solution of physical boundary value problems (BVPs) remains a challenging task demanded in many applications. Conventional numerical methods require time-consuming domain discretization and solving techniques that have limited throughput capabilities. Here, we present an efficient data-driven deep neural network (DNN) approach to the non-iterative solution of arbitrary 2D linear elastic BVPs. Our results show that a U-Net-based surrogate model trained on a representative set of reference FDM solutions can accurately emulate linear elastic material behavior, with manifold applications in deformable modeling and simulation.
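
The following minimal PyTorch sketch illustrates the kind of U-Net-style surrogate described above, mapping an encoded boundary-condition grid to a displacement field. It is an editor's illustration under assumed input/output shapes (2-channel grids); "TinyUNet" and its layer sizes are invented for the example and are not the authors' architecture.

import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """Toy encoder-decoder with one skip connection (illustration only)."""
    def __init__(self, c_in=2, c_out=2):
        super().__init__()
        self.enc1, self.enc2 = block(c_in, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)
        self.head = nn.Conv2d(32, c_out, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # full-resolution features
        e2 = self.enc2(self.pool(e1))                       # downsampled features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))   # decoder with skip
        return self.head(d)                                 # displacement field

bc = torch.rand(1, 2, 128, 128)      # encoded boundary conditions (assumed format)
u = TinyUNet()(bc)                   # predicted 2-channel displacement field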

2.
Plant Phenomics ; 6: 0155, 2024.
Article in English | MEDLINE | ID: mdl-38476818

ABSTRACT

Detection of spikes is the first important step toward image-based quantitative assessment of crop yield. However, spikes of grain plants occupy only a tiny fraction of the image area and often emerge in the middle of the mass of plant leaves, which exhibit colors similar to spike regions. Consequently, accurate detection of grain spikes is, in general, a non-trivial task even for advanced, state-of-the-art deep neural networks (DNNs). To improve pattern detection in spikes, we propose architectural changes to Faster-RCNN (FRCNN) by reducing the feature extraction layers and introducing a global attention module. The performance of our extended FRCNN-A vs. conventional FRCNN was compared on images of different European wheat cultivars, including "difficult" bushy phenotypes, from two different phenotyping facilities and optical setups. Our experimental results show that the introduced architectural adaptations in FRCNN-A helped to improve spike detection accuracy in inner regions. The mean average precision (mAP) of FRCNN and FRCNN-A on inner spikes is 76.0% and 81.0%, respectively, while the state-of-the-art Swin Transformer detector reaches a mAP of 83.0%. As a lightweight network, FRCNN-A is faster than FRCNN and the Swin Transformer on both the baseline and the augmented training datasets. On the FastGAN-augmented dataset, FRCNN achieved a mAP of 84.24%, FRCNN-A attained a mAP of 85.0%, and the Swin Transformer achieved a mAP of 89.45%. The increase in mAP of the DNNs on the augmented datasets is proportional to the number of original and augmented IPK images. Overall, this study indicates a superior performance of attention-based deep learning models in detecting small and subtle features of grain spikes.
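
A hedged sketch of a global attention module of the kind the abstract mentions, implemented here as channel re-weighting from a globally pooled context vector; the actual FRCNN-A module and where it is inserted into the backbone may differ, and the class name "GlobalAttention" is an assumption.

import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    """Re-weights feature channels using a globally pooled context vector."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        context = x.mean(dim=(2, 3))                 # global average pooling
        weights = self.fc(context)[:, :, None, None]
        return x * weights                           # channel-wise re-weighting

features = torch.rand(1, 256, 50, 50)                # e.g. a backbone feature map
attended = GlobalAttention(256)(features)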

3.
Sci Rep ; 13(1): 9116, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37277366

ABSTRACT

Efficient solution of partial differential equations (PDEs) describing physical laws is of interest for manifold applications in computer science and image analysis. However, conventional domain discretization techniques for numerically solving PDEs, such as the Finite Difference Method (FDM) and the Finite Element Method (FEM), are unsuitable for real-time applications and are also quite laborious to adapt to new applications, especially for non-experts in numerical mathematics and computational modeling. More recently, alternative approaches to solving PDEs using so-called Physically Informed Neural Networks (PINNs) have received increasing attention because of their straightforward application to new data and potentially more efficient performance. In this work, we present a novel data-driven approach to solving the 2D Laplace PDE with arbitrary boundary conditions using deep learning models trained on a large set of reference FDM solutions. Our experimental results show that both forward and inverse 2D Laplace problems can efficiently be solved using the proposed PINN approach with nearly real-time performance and an average accuracy of 94% for different types of boundary value problems compared to FDM. In summary, our deep learning based PINN PDE solver provides an efficient tool with various applications in image analysis and computational simulation of image-based physical boundary value problems.
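
As an illustration of how such reference training data could be produced, the sketch below solves the 2D Laplace equation with a simple Jacobi-iteration finite difference scheme; grid size, boundary values, and iteration count are assumptions, not the authors' data-generation setup.

import numpy as np

def solve_laplace(boundary, n_iter=5000):
    """Jacobi iteration; 'boundary' holds fixed values on the outer frame."""
    u = boundary.copy()
    for _ in range(n_iter):
        # Each interior point becomes the average of its four neighbours.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

bc = np.zeros((64, 64))
bc[0, :] = 1.0                       # Dirichlet condition on the top edge
solution = solve_laplace(bc)
print(solution[32, 32])              # interior value of the converged field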

4.
Br J Oral Maxillofac Surg ; 61(2): 152-157, 2023 02.
Article in English | MEDLINE | ID: mdl-36658060

ABSTRACT

Orbital decompression is an established procedure used to correct exophthalmos resulting from excess orbital soft tissue. This study aimed to explore a new minimally invasive technique featuring three-dimensional planning and patient-specific implants for lateral valgisation (LAVA) of the orbital wall. We analysed the outcomes of this procedure in nine endocrine orbitopathy (EO) patients (32-65 years of age, mean clinical activity score 4.3) who underwent this procedure between 2021 and 2022, including seven patients diagnosed with dysthyroid optic neuropathy. The impact of LAVA and wall resection on orbital areas, volumes, Hertel values, visual acuity, and new-onset diplopia was determined. We found that LAVA and resection of 18 orbital walls resulted in a significant enlargement of the orbital volume from a preoperative mean of 30.8 ± 3.5 cm3 to a postoperative mean of 37.3 ± 5.8 cm3 (mean difference, 6.2 ± 1.8 cm3; p < 0.001); the procedure also resulted in a significant reduction in the mean Hertel value, from 28.7 ± 1.9 mm to 20.0 ± 1.9 mm (mean difference, 8.7 ± 1.9 mm; p < 0.001). Visual acuity declined in three patients (33.3%), with reductions from 0.25 to 0.125, 0.8 to 0.125, and 1.2 to 0.7, respectively. No new diplopia occurred postoperatively; however, five patients with preoperative diplopia did not improve postoperatively and required additional surgical intervention. Similarly, four patients required supplemental eyelid surgery. In conclusion, our study suggests that LAVA with partial floor resection is effective and provides a substantially improved outcome for patients undergoing surgical treatment of EO with the use of double navigation and piezosurgical methods.


Subject(s)
Exophthalmos , Graves Ophthalmopathy , Humans , Graves Ophthalmopathy/diagnosis , Graves Ophthalmopathy/surgery , Orbit/surgery , Diplopia , Retrospective Studies , Decompression, Surgical/methods , Exophthalmos/surgery
5.
Plant Phenomics ; 5: 0081, 2023.
Article in English | MEDLINE | ID: mdl-38235124

ABSTRACT

Consideration of the properties of awns is important for the phenotypic description of grain crops. Awns have a number of important functions in grasses, including assimilation, mechanical protection, and seed dispersal and burial. An important feature of the awn is the presence or absence of barbs: tiny, hook-like, single-celled trichomes on the outer awn surface that can be visualized using microscopic imaging. There are, however, no suitable software tools for the automated analysis of these small, semi-transparent structures in a high-throughput manner. Furthermore, automated analysis of barbs using conventional methods of pattern detection and segmentation is hampered by the high variability of their optical appearance, including size, shape, and surface density. In this work, we present a software tool for automated detection and phenotyping of barbs in microscopic images of awns, which is based on a dedicated deep learning model (BarbNet). Our experimental results show that BarbNet is capable of detecting barb structures in different awn phenotypes with an average accuracy of 90%. Furthermore, we demonstrate that phenotypic traits derived from BarbNet-segmented images enable a quite robust categorization of four contrasting awn phenotypes with an accuracy of >85%. Based on the promising results of this work, we see potential applications of the proposed model in automating the sorting of barley awns for plant developmental analysis.
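
Assuming BarbNet outputs a binary barb mask, the following sketch shows how simple phenotypic traits (barb count, density, mean size) could be derived with scikit-image; the function name and the toy masks are illustrative only.

import numpy as np
from skimage import measure

def barb_traits(barb_mask, awn_mask):
    """Count barbs and relate them to the awn surface area (in pixels)."""
    props = measure.regionprops(measure.label(barb_mask))
    count = len(props)
    awn_area = int(awn_mask.sum())
    return {"count": count,
            "density_per_px": count / awn_area if awn_area else 0.0,
            "mean_size_px": float(np.mean([p.area for p in props])) if props else 0.0}

rng = np.random.default_rng(0)
awn = np.ones((200, 200), dtype=bool)                # toy awn mask
barbs = rng.random((200, 200)) > 0.995               # toy barb mask
print(barb_traits(barbs, awn))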

6.
Front Plant Sci ; 13: 906410, 2022.
Article in English | MEDLINE | ID: mdl-35909752

ABSTRACT

Background: Automated analysis of large image data is highly demanded in high-throughput plant phenotyping. Due to the large variability in optical plant appearance and experimental setups, advanced machine and deep learning techniques are required for automated detection and segmentation of plant structures in complex optical scenes. Methods: Here, we present a GUI-based software tool (DeepShoot) for efficient, fully automated segmentation and quantitative analysis of greenhouse-grown shoots, which is based on pre-trained U-net deep learning models of arabidopsis, maize, and wheat plant appearance in different rotational side and top views. Results: Our experimental results show that the developed algorithmic framework performs automated segmentation of side- and top-view images of different shoots, acquired at different developmental stages using different phenotyping facilities, with an average accuracy of more than 90%, and outperforms shallow as well as conventional and encoder-backbone networks in cross-validation tests with respect to both precision and processing time. Conclusion: The DeepShoot tool presented in this study provides an efficient solution for automated segmentation and phenotypic characterization of greenhouse-grown plant shoots that is suitable also for end-users without advanced IT skills. Although primarily trained on images of three selected plants, this tool can be applied to images of other plant species exhibiting similar optical properties.
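
A hedged sketch of batch application of a pre-trained segmentation model to a folder of greenhouse images; the model file "deepshoot_topview.pt", its TorchScript format, the folder name, and the output convention are assumptions and do not reflect the actual DeepShoot interface.

from pathlib import Path
import numpy as np
import torch
from PIL import Image

# 'deepshoot_topview.pt' is a hypothetical TorchScript export, not a file
# shipped with DeepShoot.
model = torch.jit.load("deepshoot_topview.pt").eval()

def segment(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    x = torch.from_numpy(img).permute(2, 0, 1)[None]      # 1 x 3 x H x W
    with torch.no_grad():
        mask = torch.sigmoid(model(x))[0, 0] > 0.5        # binary shoot mask
    return mask.numpy()

for p in sorted(Path("greenhouse_images").glob("*.png")):  # hypothetical folder
    print(p.name, int(segment(p).sum()))                   # projected shoot area (px)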

7.
Eur J Med Res ; 27(1): 92, 2022 Jun 13.
Article in English | MEDLINE | ID: mdl-35698208

ABSTRACT

Endocrine orbitopathy is typically treated by resecting orbital walls. This procedure reduces intraorbital pressure by releasing intraorbital tissue, effectively alleviating the symptoms. However, selection of an appropriate surgical plan for the treatment of endocrine orbitopathy requires careful consideration because predicting the effects of one-, two-, or three-wall resections on the release of orbital tissues is difficult. Here, based on our experience, we describe two specific orbital sites ('key points') that may significantly improve decompression results. The methodological framework of this work is mainly based on comparative analysis of pre- and post-surgery tomographic images, as well as on image- and physics-based simulation of the soft tissue outcome using finite element modelling of mechanical soft tissue behaviour. Thereby, the optimal set of unknown modelling parameters was obtained iteratively from the minimum difference between model predictions and post-surgery ground truth data. This report presents a pre-/post-surgery study indicating a crucial role of these particular key points in improving the post-surgery outcome of decompression treatment of endocrine orbitopathy, which is also supported by 3D biomechanical simulation of alternative two-wall resection plans. In particular, our experimental results show a nearly linear relationship between the resection area and the amount of tissue released into the extraorbital space. However, a disproportionately higher volume of orbital outflow could be achieved under consideration of the two special key points. Our study demonstrates the importance of considering natural biomechanical obstacles to achieve improved outcomes in two-wall resection treatment of endocrine orbitopathy. Further investigations of alternative surgery scenarios and post-surgery data are required to generalize the insights of this feasibility study.
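
The iterative parameter estimation described above can be sketched as a minimization loop in which unknown tissue parameters are tuned until the simulated post-surgery state matches the measurement; "simulate_decompression" below is only a placeholder for the finite element model, and all values are hypothetical.

import numpy as np
from scipy.optimize import minimize

measured_displacement_mm = 4.2      # hypothetical post-surgery ground truth

def simulate_decompression(params):
    """Placeholder for the finite element model of orbital soft tissue."""
    stiffness, viscosity = params
    return 10.0 / (1.0 + stiffness) - 0.1 * viscosity

def objective(params):
    return (simulate_decompression(params) - measured_displacement_mm) ** 2

result = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("fitted parameters:", result.x)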


Subject(s)
Graves Ophthalmopathy , Decompression, Surgical , Graves Ophthalmopathy/surgery , Humans , Orbit/surgery , Retrospective Studies , Treatment Outcome
8.
Sensors (Basel) ; 21(22)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34833515

ABSTRACT

Automated analysis of small and optically variable plant organs, such as grain spikes, is highly demanded in quantitative plant science and breeding. Previous works primarily focused on the detection of prominently visible spikes emerging at the top of grain plants growing in field conditions. However, accurate and automated analysis of all fully and partially visible spikes in greenhouse images is a more challenging task that was rarely addressed in the past. A particular difficulty for image analysis is represented by leaf-covered, occluded, but also matured spikes of bushy crop cultivars that can hardly be differentiated from the remaining plant biomass. To address the challenge of automated analysis of arbitrary spike phenotypes in different grain crops and optical setups, we performed a comparative investigation of six neural network methods for pattern detection and segmentation in RGB images, including five deep and one shallow neural network. Our experimental results demonstrate that advanced deep learning methods show superior performance, achieving over 90% accuracy in the detection and segmentation of spikes in wheat, barley and rye images. However, spike detection in new crop phenotypes can be performed more accurately than segmentation. Furthermore, the detection and segmentation of matured, partially visible and occluded spikes, whose phenotypes substantially deviate from the training set of regular spikes, still represent a challenge to neural network models trained on a limited set of a few hundred manually labeled ground truth images. Limitations and potential further improvements of the presented algorithmic frameworks for spike image analysis are discussed. Besides theoretical and experimental investigations, we provide a GUI-based tool (SpikeApp), which shows the application of pre-trained neural networks to fully automate spike detection, segmentation and phenotyping in images of greenhouse-grown plants.
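
For readers unfamiliar with how detection performance is scored, the sketch below matches predicted and ground-truth spike boxes by intersection over union (IoU); the boxes and threshold are made up for illustration and are unrelated to the reported results.

def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def detection_recall(pred, gt, thr=0.5):
    """Fraction of ground-truth boxes matched by at least one prediction."""
    hits = sum(any(iou(g, p) >= thr for p in pred) for g in gt)
    return hits / len(gt) if gt else 1.0

gt_boxes = [[10, 10, 50, 80], [120, 30, 160, 110]]       # made-up ground truth
pred_boxes = [[12, 8, 52, 78], [200, 200, 240, 260]]     # made-up predictions
print(detection_recall(pred_boxes, gt_boxes))            # 0.5 in this toy case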


Subject(s)
Neural Networks, Computer , Plant Breeding , Edible Grain , Image Processing, Computer-Assisted , Plant Leaves
9.
Sci Rep ; 11(1): 16047, 2021 08 06.
Article in English | MEDLINE | ID: mdl-34362967

ABSTRACT

High-throughput root phenotyping in soil has become an indispensable quantitative tool for assessing the effects of climatic factors and molecular perturbations on plant root morphology, development and function. To efficiently analyse large amounts of structurally complex soil-root images, advanced methods for automated image segmentation are required. Due to the often unavoidable overlap between the intensities of fore- and background regions, simple thresholding methods are generally not suitable for the segmentation of root regions. Higher-level cognitive models such as convolutional neural networks (CNNs) provide capabilities for segmenting roots from heterogeneous and noisy background structures; however, they require a representative set of manually segmented (ground truth) images. Here, we present a GUI-based tool for fully automated quantitative analysis of root images using a pre-trained CNN model, which relies on an extension of the U-Net architecture. The developed CNN framework was designed to efficiently segment root structures of different size, shape and optical contrast using low-budget hardware systems. The CNN model was trained on a set of 6465 masks derived from 182 manually segmented near-infrared (NIR) maize root images. Our experimental results show that the proposed approach achieves a Dice coefficient of 0.87 and outperforms existing tools (e.g., SegRoot, with a Dice coefficient of 0.67) when applied not only to NIR but also to other imaging modalities and plant species, such as barley and arabidopsis soil-root images from LED-rhizotron and UV imaging systems, respectively. In summary, the developed software framework enables users to efficiently analyse soil-root images in an automated manner (i.e., without manual interaction with data and/or parameter tuning), providing quantitative plant scientists with a powerful analytical tool.
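
The Dice coefficient reported above can be computed as in the following minimal NumPy sketch; the random toy masks stand in for predicted and ground-truth root segmentations.

import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(1)
pred = rng.random((256, 256)) > 0.7       # toy predicted root mask
truth = rng.random((256, 256)) > 0.7      # toy ground-truth root mask
print(round(dice(pred, truth), 3))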

11.
PeerJ ; 8: e10373, 2020.
Article in English | MEDLINE | ID: mdl-33362957

ABSTRACT

Silibinin (SIL), a natural flavonolignan from the milk thistle (Silybum marianum), is known to exhibit remarkable hepatoprotective, antineoplastic and EMT-inhibiting effects in different cancer cells by targeting multiple molecular targets and pathways. However, the predominant majority of previous studies investigated the effects of this phytocompound in one particular cell line. Here, we carry out a systematic analysis of the dose-dependent viability response to SIL in five non-small cell lung cancer (NSCLC) lines that gradually differ with respect to their intrinsic EMT stage. By correlating gene expression profiles of the NSCLC cell lines with the pattern of their SIL IC50 response, a group of cell cycle, survival and stress responsive genes, including some prominent targets of STAT3 (BIRC5, FOXM1, BRCA1), was identified. The relevance of these computationally selected genes to the SIL viability response of NSCLC cells was confirmed by transient knockdown tests. In contrast to other EMT-inhibiting compounds, no correlation between the SIL IC50 and the intrinsic EMT stage of NSCLC cells was observed. Our experimental results show that the SIL viability response of differently constituted NSCLC cells is linked to a subnetwork of tightly interconnected genes whose transcriptomic pattern can be used as a benchmark for the assessment of individual SIL sensitivity instead of the conventional EMT signature. Insights gained in this study pave the way for the optimization of customized adjuvant therapy of malignancies using Silibinin.
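
A hedged sketch of how an IC50 can be estimated by fitting a four-parameter logistic dose-response curve to viability data; the doses and viability values below are synthetic, not data from the study.

import numpy as np
from scipy.optimize import curve_fit

def logistic4(dose, bottom, top, log_ic50, hill):
    """Four-parameter logistic curve with the IC50 parameterized on a log scale."""
    return bottom + (top - bottom) / (1.0 + (dose / 10.0 ** log_ic50) ** hill)

doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])      # µM, synthetic
viability = np.array([0.98, 0.95, 0.80, 0.45, 0.20, 0.08])  # fraction, synthetic

params, _ = curve_fit(logistic4, doses, viability, p0=[0.0, 1.0, 1.5, 1.0])
print("estimated IC50 (µM):", round(10.0 ** params[2], 1))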

12.
Front Plant Sci ; 11: 1254, 2020.
Article in English | MEDLINE | ID: mdl-32973827

ABSTRACT

Development of live imaging techniques that provide information on how chromatin is organized in living cells is pivotal to deciphering the regulation of biological processes. Here, we demonstrate the improvement of a live imaging technique based on CRISPR/Cas9. In this approach, the sgRNA scaffold is fused to RNA aptamers, including MS2 and PP7. When dead Cas9 (dCas9) is co-expressed with the chimeric sgRNA, the fluorescent coat proteins tagged for the MS2 and PP7 aptamers (tdMCP-FP and tdPCP-FP) are recruited to the targeted sequence. Compared to previous work with dCas9:GFP, we show that the quality of telomere labeling was improved in transiently transformed Nicotiana benthamiana using aptamer-based CRISPR-imaging constructs. Labeling is influenced by the copy number of aptamers and less so by the promoter type. The same constructs were not applicable for labeling of repeats in stably transformed plants and roots. The constant interaction of the RNP complex with its target DNA might interfere with cellular processes.

13.
Plant Methods ; 16: 95, 2020.
Article in English | MEDLINE | ID: mdl-32670387

ABSTRACT

BACKGROUND: Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate automated segmentation of unimodal plant images. To overcome the problem of ambiguous color information in unimodal data, images of different modalities can be combined into a virtual multispectral cube. However, due to motion artefacts caused by the relocation of plants between photochambers, the alignment of multimodal images is often compromised by blurring artefacts. RESULTS: Here, we present an approach to automated segmentation of greenhouse plant images that is based on co-registration of fluorescence (FLU) and visible light (VIS) camera images, followed by separation of plant and marginal background regions using different species- and camera-view-tailored classification models. Our experimental results, including a direct comparison with manually segmented ground truth data, show that images of different plant types acquired at different developmental stages from different camera views can be automatically segmented with an average accuracy of 93% (SD = 5%) using our two-step registration-classification approach. CONCLUSION: Automated segmentation of arbitrary greenhouse images exhibiting highly variable optical plant and background appearance represents a challenging task for data classification techniques that rely on the detection of invariances. To overcome the limitations of unimodal image analysis, a two-step registration-classification approach to the combined analysis of fluorescence and visible light images was developed. Our experimental results show that this algorithmic approach enables accurate segmentation of different FLU/VIS plant images suitable for application in a fully automated high-throughput manner.
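
A hedged sketch of a co-registration step using OpenCV's ECC algorithm for intensity-based affine alignment; the synthetic images and the choice of ECC are assumptions for illustration, not the paper's exact registration method.

import cv2
import numpy as np

rng = np.random.default_rng(0)
vis = cv2.GaussianBlur((rng.random((200, 200)) * 255).astype(np.uint8), (21, 21), 5)
shift = np.float32([[1, 0, 3], [0, 1, -2]])              # known toy offset
flu = cv2.warpAffine(vis, shift, (200, 200))             # stand-in FLU image

warp = np.eye(2, 3, dtype=np.float32)                    # initial affine guess
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 500, 1e-6)
_, warp = cv2.findTransformECC(vis, flu, warp, cv2.MOTION_AFFINE, criteria)

flu_aligned = cv2.warpAffine(flu, warp, (200, 200),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
print(np.round(warp, 2))                                 # recovers roughly the applied offset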

14.
Front Plant Sci ; 11: 666, 2020.
Article in English | MEDLINE | ID: mdl-32655586

ABSTRACT

The spike is one of the yield-related organs of wheat plants. Determination of the phenological stages, including the heading time point (HTP), and of the spike area from non-invasive phenotyping images provides the necessary information for the inference of growth-related traits. The algorithm previously developed by Qiongyan et al. for spike detection in 2-D images turns out to be less accurate when applied to European cultivars that produce many more leaves. Therefore, we present an improved and extended method in which (i) the wavelet amplitude is used as input to the Laws texture energy-based neural network instead of the original grayscale images, and (ii) non-spike structures (e.g., leaves) are subsequently suppressed by combining the result of the neural network prediction with a Frangi-filtered image. Using this two-step approach, a 98.6% overall accuracy of neural network segmentation, based on direct comparison with ground-truth data, could be achieved. Moreover, the comparative error rates in spike HTP detection and growth correlation among the ground truth, the algorithm developed by Qiongyan et al., and the proposed algorithm are discussed in this paper. The proposed algorithm was also capable of significantly reducing the error rate of HTP detection by 75% and improving the accuracy of spike area estimation by 50% in comparison with the Qiongyan et al. method. With these algorithmic improvements, HTP detection on a diverse set of 369 plants was performed in a high-throughput manner. This analysis demonstrated that the HTP of 104 plants (comprising 57 genotypes) with a lower biomass and tillering range (e.g., earlier-heading types) was correctly determined. However, fine-tuning or extension of the developed method is required for high-biomass plants whose spikes emerge within green bushes. In conclusion, our proposed method allows significantly more reliable results for HTP detection and spike growth analysis to be achieved in application to European cultivars with earlier-heading types.
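
The leaf-suppression step can be illustrated with scikit-image's Frangi (vesselness) filter, which responds strongly to elongated structures; the random stand-in images and thresholds below are purely illustrative and not the paper's parameters.

import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(0)
gray = rng.random((256, 256))             # stand-in for a grayscale plant image
nn_spike_prob = rng.random((256, 256))    # stand-in for the network prediction

leafiness = frangi(gray)                  # strong response on elongated structures
spike_mask = (nn_spike_prob > 0.5) & (leafiness < 0.01)   # suppress leaf-like pixels
print("spike pixels kept:", int(spike_mask.sum()))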

15.
Clin Biomech (Bristol, Avon) ; 71: 86-91, 2020 01.
Article in English | MEDLINE | ID: mdl-31707189

ABSTRACT

BACKGROUND: Surgical treatment of endocrine orbitopathy can be performed by resecting orbital walls, which effectively releases superfluous tissue from the surgically enlarged orbital space, allowing the eyeballs to move back. Existing approaches aim to select an optimal surgical strategy based on statistical correlations between the extent of the surgical procedure and the resulting bulbus displacement but do not provide an individual surgery plan or predict the surgery outcome. METHODS: In this retrospective study, we performed a quantitative analysis of pre- and post-surgery 3D tomographic data of six patients and applied explorative biomechanical modeling of orbital mechanics to dissect the factors influencing patient-specific outcome. FINDINGS: Our experimental results showed a large variability of the backward eyeball displacement in dependence on the amount of orbital volume flow, which could partially be described by computational simulation. Our detailed analysis revealed that patients with regular fat tissue show a good correlation between bulbus displacement and the relative volume of decompressed tissue, which, in turn, correlates with the decrease in hydrostatic pressure. In contrast, patients with fibrotic tissue exhibit significantly reduced and computationally less predictable eyeball translation in response to surgical tissue decompression. INTERPRETATION: Based on the results of this study, we see great potential for quantitative planning of surgical exophthalmos treatment using 3D biomechanical modeling. Conventional approaches to planning soft tissue interventions consider, however, only the patient's 3D anatomy and widely disregard individual tissue properties. Further investigations are required to establish reliable procedures for assessing individual tissue properties and incorporating them into patient-specific models of orbital mechanics.


Subject(s)
Adipose Tissue/surgery , Decompression, Surgical , Exophthalmos/surgery , Graves Ophthalmopathy/surgery , Orbit/surgery , Adult , Biomechanical Phenomena , Computer Simulation , Diagnosis, Computer-Assisted , Eye , Female , Fibrosis/surgery , Humans , Imaging, Three-Dimensional , Male , Middle Aged , Reproducibility of Results , Retrospective Studies
16.
Sci Rep ; 9(1): 19674, 2019 12 23.
Article in English | MEDLINE | ID: mdl-31873104

ABSTRACT

Quantitative characterization of root system architecture and its development is important for the assessment of a complete plant phenotype. To enable high-throughput phenotyping of plant roots, efficient solutions for automated image analysis are required. Since plants naturally grow in an opaque soil environment, automated analysis of optically heterogeneous and noisy soil-root images represents a challenging task. Here, we present a user-friendly GUI-based tool for semi-automated analysis of soil-root images, which allows efficient image segmentation to be performed using a combination of adaptive thresholding and morphological filtering, and various quantitative descriptors of the root system architecture to be derived, including total length, local width, projection area, volume, spatial distribution and orientation. The results of our semi-automated root image segmentation are in good conformity with the reference ground-truth data (mean Dice coefficient = 0.82) compared to IJ_Rhizo and GiAroots. Root biomass values calculated with our tool within a few seconds show a high correlation (Pearson coefficient = 0.8) with the results obtained using conventional, purely manual segmentation approaches. Equipped with a number of adjustable parameters and optional correction tools, our software is capable of significantly accelerating quantitative analysis and phenotyping of soil-, agar- and washed-root images.
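
A sketch of an adaptive-thresholding and morphological-filtering step with scikit-image; block size, filter sizes, and the random stand-in image are illustrative choices, not the tool's actual parameters.

import numpy as np
from skimage.filters import threshold_local
from skimage.morphology import binary_closing, disk, remove_small_objects

rng = np.random.default_rng(0)
soil_root = rng.random((512, 512))                  # stand-in for a soil-root image

local_thr = threshold_local(soil_root, block_size=51, offset=-0.02)
mask = soil_root > local_thr                        # adaptive thresholding
mask = binary_closing(mask, disk(2))                # bridge small gaps
mask = remove_small_objects(mask, min_size=64)      # drop small noise blobs

print("root pixels:", int(mask.sum()))              # basis for further descriptors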


Subject(s)
Image Processing, Computer-Assisted/methods , Plant Roots/anatomy & histology , Algorithms , Arabidopsis/anatomy & histology , Computer Graphics , High-Throughput Screening Assays , Phenotype , Software , Soil , User-Computer Interface
17.
PLoS One ; 14(9): e0221203, 2019.
Article in English | MEDLINE | ID: mdl-31568494

ABSTRACT

With the introduction of multi-camera systems in modern plant phenotyping, new opportunities for combined multimodal image analysis emerge. Visible light (VIS), fluorescence (FLU) and near-infrared images enable scientists to study different plant traits based on optical appearance, biochemical composition and nutrition status. A straightforward analysis of high-throughput image data is hampered by a number of natural and technical factors, including large variability of plant appearance, inhomogeneous illumination, and shadows and reflections in the background regions. Consequently, automated segmentation of plant images represents a big challenge and often requires extensive human-machine interaction. Combined analysis of different image modalities may enable automation of plant segmentation in "difficult" image modalities, such as VIS images, by utilising the segmentation results of image modalities that exhibit higher contrast between plant and background, i.e. FLU images. For efficient segmentation and detection of diverse plant structures (e.g., leaf tips, flowers), image registration techniques based on feature point (FP) matching are of particular interest. However, finding reliable feature points and point pairs for differently structured plant species in multimodal images can be challenging. To address this task in a general manner, different feature point detectors should be considered. Here, a comparison of seven different feature point detectors for automated registration of VIS and FLU plant images is performed. Our experimental results show that straightforward image registration using FP detectors is prone to errors due to the large structural differences between the FLU and VIS modalities. We show that structural image enhancement, such as background filtering and edge image transformation, significantly improves the performance of FP algorithms. To overcome the limitations of single FP detectors, a combination of different FP methods is suggested. We demonstrate the application of our enhanced FP approach for automated registration of a large number of FLU/VIS images of developing plant species acquired from high-throughput phenotyping experiments.
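
A hedged sketch of one such feature-point pipeline: ORB matching on edge-transformed images followed by RANSAC homography estimation with OpenCV. ORB is only one detector of the kind compared above, and the image file names are placeholders, not data from the study.

import cv2
import numpy as np

vis = cv2.imread("plant_vis.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
flu = cv2.imread("plant_flu.png", cv2.IMREAD_GRAYSCALE)

# Edge transformation as a simple structural enhancement before FP detection.
vis_e, flu_e = cv2.Canny(vis, 50, 150), cv2.Canny(flu, 50, 150)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(vis_e, None)
kp2, des2 = orb.detectAndCompute(flu_e, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)      # robust to outlier matches
flu_aligned = cv2.warpPerspective(flu, H, (vis.shape[1], vis.shape[0]))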


Subject(s)
Image Processing, Computer-Assisted/methods , Plants/anatomy & histology , Algorithms , Chlorophyll/metabolism , Fluorescence , Humans , Image Processing, Computer-Assisted/statistics & numerical data , Lighting , Phenotype , Photography/methods , Plant Development , Plant Leaves/anatomy & histology , Plant Leaves/metabolism , Plants/metabolism
18.
Plant Methods ; 15: 44, 2019.
Article in English | MEDLINE | ID: mdl-31168314

ABSTRACT

With the introduction of high-throughput multisensory imaging platforms, the automation of multimodal image analysis has become a focus of quantitative plant research. Due to a number of natural and technical factors (e.g., inhomogeneous scene illumination, shadows, and reflections), unsupervised identification of relevant plant structures (i.e., image segmentation) represents a nontrivial task that often requires extensive human-machine interaction. Registration of multimodal plant images enables automated segmentation of 'difficult' image modalities, such as visible light or near-infrared images, using the segmentation results of image modalities that exhibit higher contrast between plant and background regions (such as fluorescence images). Furthermore, registration of different image modalities is essential for the assessment of a consistent multiparametric plant phenotype, where, for example, chlorophyll and water content as well as disease- and/or stress-related pigmentation can be studied simultaneously at a local scale. To automatically register thousands of images, efficient algorithmic solutions for the unsupervised alignment of two structurally similar but, in general, non-identical images are required. For the establishment of image correspondences, different algorithmic approaches based on different image features have been proposed. A particularity of plant image analysis, however, is the large variability of shapes and colors of different plants measured at different developmental stages from different views. While adult plant shoots typically have a unique structure, young shoots may have a nonspecific shape that can often hardly be distinguished from background structures. Consequently, it is not clear a priori which image features and registration techniques are suitable for the alignment of various multimodal plant images. Furthermore, dynamically measured plants may exhibit nonuniform movements that require the application of nonrigid registration techniques. Here, we investigate three common techniques for the registration of visible light and fluorescence images that rely on finding correspondences between (i) feature points, (ii) frequency-domain features, and (iii) image intensity information. The performance of the registration methods is validated in terms of robustness and accuracy, measured by a direct comparison with manually segmented images of different plants. Our experimental results show that all three techniques are sensitive to structural image distortions and require additional preprocessing steps, including structural enhancement and characteristic scale selection. To overcome the limitations of conventional approaches, we develop an iterative algorithmic scheme that allows both rigid and slightly nonrigid registration of high-throughput plant images to be performed in a fully automated manner.
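
As an example of the third technique class (image intensity information) with a slightly nonrigid correction, the sketch below uses TV-L1 optical flow from scikit-image; the test image and the toy misalignment are illustrative and not part of the study.

import numpy as np
from skimage import data
from skimage.registration import optical_flow_tvl1
from skimage.transform import warp

reference = data.camera()[::4, ::4] / 255.0          # 128 x 128 test image
moving = np.roll(reference, shift=2, axis=1)         # toy misalignment

v, u = optical_flow_tvl1(reference, moving)          # per-pixel row/column shifts
rows, cols = np.meshgrid(np.arange(reference.shape[0]),
                         np.arange(reference.shape[1]), indexing="ij")
registered = warp(moving, np.array([rows + v, cols + u]), mode="edge")
print("mean residual:", float(np.abs(registered - reference).mean()))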

19.
Viruses ; 12(1)2019 12 28.
Article in English | MEDLINE | ID: mdl-31905685

ABSTRACT

Chronic hepatitis C virus (HCV) infection still constitutes a major global health problem, with almost half a million deaths per year. To date, the human hepatoma cell line Huh7 and its derivatives are the only cell lines that robustly replicate HCV. However, even different subclones and passages of this single cell line exhibit tremendous differences in HCV replication efficiency. By comparative gene expression profiling using a multi-pronged correlation analysis across eight different Huh7 variants, we identified 34 candidate host factors possibly affecting HCV permissiveness. For seven of the candidates, we could show by knock-down studies their involvement in HCV replication. Notably, for at least four of them, we furthermore found that overexpression boosted HCV replication in lowly permissive Huh7 cells, most prominently for the histone-binding transcriptional repressor THAP7 and the nuclear receptor NR0B2. For NR0B2, our results suggest a finely balanced expression optimum reached in highly permissive Huh7 cells, with even higher levels leading to a nearly complete breakdown of HCV replication, likely due to a dysregulation of bile acid and cholesterol metabolism. Our unbiased expression-profiling approach hence led to the identification of four host cellular genes that contribute to HCV permissiveness in Huh7 cells. These findings add to an improved understanding of the molecular underpinnings of the strict host cell tropism of HCV.
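
A hedged sketch of a correlation-based ranking of candidate host factors across Huh7 variants, of the general kind described above; the data frame is synthetic and the use of Spearman correlation is an assumption, not necessarily the study's statistic.

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
variants = [f"Huh7_v{i}" for i in range(1, 9)]                 # eight variants
expr = pd.DataFrame(rng.random((200, 8)), columns=variants,
                    index=[f"gene_{i}" for i in range(200)])   # synthetic expression
permissiveness = pd.Series(rng.random(8), index=variants)      # replication efficiency

corr = expr.apply(lambda row: spearmanr(row, permissiveness)[0], axis=1)
print(corr.abs().sort_values(ascending=False).head(10))        # top candidate genes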


Subject(s)
Gene Expression Profiling , Hepacivirus/genetics , Host Microbial Interactions/genetics , Viral Tropism , Virus Replication/genetics , Carcinoma, Hepatocellular/virology , Cell Line, Tumor , Hepacivirus/physiology , Humans , Liver Neoplasms/virology , mu-Crystallins
20.
Front Plant Sci ; 9: 1519, 2018.
Article in English | MEDLINE | ID: mdl-30464765

ABSTRACT

Modern facilities for high-throughput phenotyping provide plant scientists with large amounts of multi-modal image data. The combination of different image modalities is advantageous for image segmentation, quantitative trait derivation, and assessment of a more accurate and extended plant phenotype. However, visible light (VIS), fluorescence (FLU), and near-infrared (NIR) images taken with different cameras from different viewpoints at different spatial resolutions exhibit not only relative geometrical transformations but also considerable structural differences that hamper a straightforward alignment and combined analysis of multi-modal image data. Conventional techniques of image registration are predominantly tailored to the detection of relative geometrical transformations between two otherwise identical images, and become less accurate when applied to only partially similar optical scenes. Here, we focus on the relatively new technical problem of FLU/VIS plant image registration. We present a framework for automated alignment of FLU/VIS plant images which is based on an extension of the phase correlation (PC) approach, a frequency-domain technique for image alignment that relies on the detection of a phase shift between two Fourier-space transforms. Primarily tailored to the detection of affine image transformations between two structurally identical images, PC is known to be sensitive to structural image distortions. We investigate the effects of image preprocessing and scaling on the accuracy of image registration and suggest an integrative algorithmic scheme that overcomes the shortcomings of conventional single-step PC when applied to non-identical multi-modal images. Our experimental tests with FLU/VIS images of different plant species, taken at different phenotyping facilities and at different developmental stages, including difficult cases such as small plant shoots of non-specific shape and non-uniformly moving leaves, demonstrate the improved performance of our extended PC approach within the scope of high-throughput plant phenotyping.
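
The frequency-domain core of phase correlation can be written in a few lines of NumPy: the shift is read off from the peak of the inverse transform of the normalized cross-power spectrum. This is a minimal single-step sketch on toy data, not the extended scheme proposed in the paper.

import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer shift that maps 'ref' onto 'mov'."""
    F, G = np.fft.fft2(ref), np.fft.fft2(mov)
    cross_power = np.conj(F) * G
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase information only
    corr = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                        # map peak to a signed shift
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(5, -3), axis=(0, 1))
print(phase_correlation(a, b))                        # recovers (5, -3)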
